    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Dance training shapes action perception and its neural implementation within the young and older adult brain

    How we perceive others in action is shaped by our prior experience. Many factors influence brain responses when observing others in action, including training in a particular physical skill, such as sport or dance, and also general development and aging processes. Here, we investigate how learning a complex motor skill shapes neural and behavioural responses among a dance-naïve sample of 20 young and 19 older adults. Across four days, participants physically rehearsed one set of dance sequences, observed a second set, and left a third set untrained. Functional MRI data were acquired prior to and immediately following training. Participants’ behavioural performance on motor and visual tasks improved across the training period, with younger adults showing steeper performance gains than older adults. At the brain level, both age groups demonstrated decreased sensorimotor cortical engagement after physical training, with younger adults showing more pronounced decreases in inferior parietal activity compared to older adults. Neural decoding results demonstrate that among both age groups, visual and motor regions contain experience-specific representations of new motor learning. By combining behavioural measures of performance with univariate and multivariate measures of brain activity, we can start to build a more complete picture of age-related changes in experience-dependent plasticity.

    From automata to animate beings: the scope and limits of attributing socialness to artificial agents

    Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive construal within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the “like me” hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporoparietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.

    Social cognition in the age of human–robot interaction

    Artificial intelligence advances have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate desire to perceive social cues in the environment, as well as the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human–robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools, including mobile neuroimaging, to explore long-term, embodied human–robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale, using sensitive and appropriate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.

    Anodal tDCS over Primary Motor Cortex Provides No Advantage to Learning Motor Sequences via Observation.

    When learning a new motor skill, we benefit from watching others. It has been suggested that observation of others' actions can build a motor representation in the observer, and as such, physical and observational learning might share a similar neural basis. If so, then motor cortex stimulation during observational practice should enhance learning by observation just as it enhances learning through physical practice. Here, we used transcranial direct-current stimulation (tDCS) to address whether anodal stimulation of M1 during observational training facilitates skill acquisition. Participants learned keypress sequences across four consecutive days of observational practice while receiving active or sham stimulation over M1. The results demonstrated that active stimulation provided no advantage to skill learning over sham stimulation. Further, Bayesian analyses revealed evidence in favour of the null hypothesis across our dependent measures. Our findings therefore provide no support for the hypothesis that excitatory M1 stimulation can enhance observational learning in a similar manner to physical learning. More generally, the results add to a growing literature suggesting that the effects of tDCS tend to be small, inconsistent, and hard to replicate. Future tDCS research should consider these factors when designing experimental procedures.

    This work was supported by the Ministry of Defence of the United Kingdom Defence Science and Technology Laboratory (Grant no. DSTLX-1000083177 to Emily S. Cross and Richard Ramsey), the Economic and Social Research Council (Grant nos. ES/K001884/1 to Richard Ramsey and ES/K001892/1 to Emily S. Cross), and funding from the European Commission to Emily S. Cross (CIG11-2012-322256 and ERC-2015-STG-677270).
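
    To make the Bayesian point concrete, the following is a minimal sketch of how evidence for the null might be quantified with a default (JZS) Bayes factor using the pingouin package; the group sizes and simulated learning scores are hypothetical illustrations, not the study's data or analysis code.

    ```python
    # Hedged sketch (not the authors' code): a default JZS Bayes factor
    # comparing hypothetical active- vs sham-tDCS learning scores.
    import numpy as np
    import pingouin as pg

    rng = np.random.default_rng(0)
    active = rng.normal(loc=0.50, scale=0.15, size=18)  # hypothetical scores
    sham = rng.normal(loc=0.49, scale=0.15, size=18)

    result = pg.ttest(active, sham)        # independent-samples t-test
    bf10 = float(result["BF10"].iloc[0])   # evidence for H1 over H0
    print(f"BF10 = {bf10:.2f}, BF01 = {1 / bf10:.2f}")
    # BF01 > 3 is conventionally read as moderate evidence for the null.
    ```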

    Observing Action Sequences Elicits Sequence-Specific Neural Representations in Frontoparietal Brain Regions.

    Learning new skills by watching others is important for social and motor development throughout the lifespan. Prior research has suggested that observational learning shares common substrates with physical practice at both cognitive and brain levels. In addition, neuroimaging studies have used multivariate analysis techniques to understand neural representations in a variety of domains, including vision, audition, memory, and action, but few studies have investigated neural plasticity in representational space. Therefore, although movement sequences can be learned by observing other people's actions, a largely unanswered question in neuroscience is how experience shapes the representational space of neural systems. Here, across a sample of male and female participants, we combined pretraining and posttraining fMRI sessions with six days of observational practice to determine whether the observation of action sequences elicits sequence-specific representations in human frontoparietal brain regions and the extent to which these representations become more distinct with observational practice. Our results showed that observed action sequences are modeled by distinct patterns of activity in frontoparietal cortex and that such representations largely generalize to very similar, but untrained, sequences. These findings advance our understanding of what is modeled during observational learning (sequence-specific information), as well as how it is modeled (reorganization of frontoparietal cortex is similar to that previously shown following physical practice). Therefore, on a more fine-grained neural level than demonstrated previously, our findings reveal how the representational structure of frontoparietal cortex maps visual information onto motor circuits in order to enhance motor performance.

    SIGNIFICANCE STATEMENT: Learning by watching others is a cornerstone in the development of expertise and skilled behavior. However, it remains unclear how visual signals are mapped onto motor circuits for such learning to occur. Our finding that observed action sequences are modeled by distinct, generalizable patterns of activity in frontoparietal cortex demonstrates how motor circuit involvement in the perception of action sequences shows high fidelity to prior work, which focused on physical performance of action sequences.
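
    As a concrete illustration of the multivariate approach described above, the sketch below shows cross-validated decoding of sequence identity from region-of-interest activity patterns with scikit-learn; the array shapes, labels, and simulated data are hypothetical stand-ins, not the study's actual pipeline.

    ```python
    # Hedged sketch (not the authors' pipeline): cross-validated decoding of
    # observed-sequence identity from simulated ROI voxel patterns.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 120, 300
    X = rng.normal(size=(n_trials, n_voxels))  # one voxel pattern per trial
    y = rng.integers(0, 4, size=n_trials)      # four observed sequences

    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
    # Accuracy reliably above chance would indicate sequence-specific
    # information in the region's activity patterns.
    ```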

    Timing is everything: Dance aesthetics depend on the complexity of movement kinematics

    What constitutes a beautiful action? Research into dance aesthetics has largely focussed on subjective features like familiarity with the observed movement, but has rarely studied objective features like speed or acceleration. We manipulated the kinematic complexity of observed actions by creating dance sequences that varied in movement timing, but not in movement trajectory. Dance-naïve participants rated the dance videos on speed, effort, reproducibility, and enjoyment. Using linear mixed-effects modeling, we show that faster, more predictable movement sequences with varied velocity profiles are judged to be more effortful, less reproducible, and more aesthetically pleasing than slower sequences with more uniform velocity profiles. Accordingly, dance aesthetics depend not only on which movements are being performed but on how movements are executed and linked into sequences. The aesthetics of movement timing may apply across culturally specific dance styles and predict both preference for and perceived difficulty of dance, consistent with information theory and effort heuristic accounts of aesthetic appreciation.
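
    For readers unfamiliar with the analysis named above, here is a minimal sketch of a linear mixed-effects model in statsmodels, with a random intercept per participant; the variable names (mean_speed, enjoyment) and simulated ratings are assumptions for illustration, not the study's actual model or data.

    ```python
    # Hedged sketch (not the authors' model): enjoyment ratings regressed on
    # a kinematic predictor, with a random intercept for each participant.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_videos = 30, 16
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_videos),
        "mean_speed": np.tile(rng.uniform(0.5, 2.0, n_videos), n_subj),
    })
    df["enjoyment"] = 3 + 0.8 * df["mean_speed"] + rng.normal(0, 1, len(df))

    model = smf.mixedlm("enjoyment ~ mean_speed", df, groups=df["subject"])
    print(model.fit().summary())  # fixed effect of speed on enjoyment
    ```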

    From social brains to social robots: applying neurocognitive insights to human-robot interaction

    Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with questions about how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges and future directions in neuroscience- and psychology-inspired human–robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and the moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including social and cognitive neurosciences, psychology, artificial intelligence and robotics, the contributions showcase ways in which research from disciplines spanning the biological sciences, social sciences and technology deepens our understanding of the potential and limits of robotic agents in human social life.

    Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation

    The future of human–robot collaboration relies on people’s ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people’s ability to anticipate their behaviour. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people’s ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention, by showing just one object an agent can interact with, can improve people’s ability to understand what humanoid robots will do. Crucially, this cue had no impact on people’s ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. These findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.

    Mind meets machine: towards a cognitive science of human–machine interactions

    As robots advance from the pages and screens of science fiction into our homes, hospitals, and schools, they are poised to take on increasingly social roles. Consequently, the need to understand the mechanisms supporting human–machine interactions is becoming ever more pressing. We introduce a framework for studying the cognitive and brain mechanisms that support human–machine interactions, leveraging advances made in cognitive neuroscience to link different levels of description with relevant theory and methods. We highlight unique features that make this endeavour particularly challenging (and rewarding) for brain and behavioural scientists. Overall, the framework offers a way to study the cognitive science of human–machine interactions that respects the diversity of social machines, individuals’ expectations and experiences, and the structure and function of multiple cognitive and brain systems.